The Question of AI

Transcript 


What if I told you the man who tried to save humanity from AI is the very reason it exists today? 


Elon Musk once called artificial intelligence the single greatest threat to mankind. But here's the twist. He helped create the most powerful AI company in the world, OpenAI. And now he says that same company has betrayed him. This isn't just a story about technology. It's a story about power, betrayal, and control over the future of the human mind. 


Because somewhere between Musk's billion-dollar dreams and Silicon Valley's promises of helpful AI, something changed. Something the world wasn't supposed to notice. And what happened behind closed doors might explain why the man who warned us about AI is now building one to fight back. 


It didn't start in a boardroom.  It started in a living room. Two men, two billionaires sitting across from each other long after midnight. The conversation drifted, as it often did, toward the future, toward something most people couldn't even imagine yet. 


On one side was Elon Musk, the engineer obsessed with Mars, rockets, and the survival of humanity.  On the other was Larry Page, co-founder of Google, a man who believed in technology almost like a religion. They had been friends. Musk would stay at Larry's house whenever he visited Silicon Valley. They'd talk for hours about the limits of the human brain, about consciousness, about what happens when computers become smarter than people. 


Somewhere in those late night debates, a line was crossed. Musk spoke with urgency, warning that artificial intelligence could one day replace us, that once machines learned how to learn, there would be no going back. 


Larry laughed. To him, the idea that AI was dangerous was ridiculous, almost childish. He believed AI wouldn't destroy humanity, it would evolve beyond it. And then came the insult that changed everything. Larry called Elon a speciesist, someone who only cared about preserving humans, not progress. 


In other words, Elon wasn't thinking big enough. That word hit like a bullet. Because to Elon Musk, humanity wasn't just a species. It was everything. Our flaws, our dreams, our stories. All of it was worth protecting. And that's when something inside him snapped. 


He looked around and realized that Google, the company that now owned DeepMind, the world's most advanced AI lab, had quietly cornered the future. They controlled three-quarters of the global AI talent. They had endless servers, money, and computing power. And the people in charge didn't seem the least bit worried about what they were creating. 


To Musk, that was insanity. The people who were building the mind of the future didn't even believe it needed guardrails. That night, a seed was planted. A plan that would challenge Google's empire, maybe even dethrone it. He decided to create a counterweight. A project that would belong to everyone. A system that couldn't be owned, bought, or corrupted. He would call it OpenAI. 


Its mission was simple, almost utopian. Make artificial intelligence open source. Make it safe. Make it for humanity. 


The irony is that Elon Musk never thought it would actually work. He admitted later that he believed the mission was probably hopeless. How could a handful of scientists compete with a trillion-dollar company? But he also knew that not trying was worse than failing. So he began to gather allies, people he trusted, people brilliant enough to believe in the impossible. He convinced researchers, engineers, and visionaries to join him, promising a world where AI knowledge wouldn't be locked inside a corporate vault. And to prove he was serious, he did something very few billionaires ever do. He gave away his money with no strings attached. Around $50 million, gone just like that. No profit, no equity, no control, just faith in an idea. He helped choose the name, recruited the core scientists, and shaped the entire philosophy of the company. 


OpenAI would be the light in an industry that was quickly slipping into darkness. It was a dream powered by idealism, the belief that transparency could keep humanity safe. But dreams are fragile things, especially when they collide with power. Because the moment OpenAI started to grow, the pressure began to build. And soon that $50 million dream would turn into the biggest regret of Elon Musk's life. 


They announced it quietly. A blog post, a few photos of smiling researchers, and a mission statement that sounded too good to be true: "Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole." It was the dawn of OpenAI, and the press ate it up. Headlines called it the AI revolution with a conscience. A nonprofit built not to dominate the world, but to save it. 


To the public, it looked like the good guys had finally entered the arena. But behind the scenes, something darker was already forming. Elon Musk envisioned OpenAI as a kind of shield, a counterweight against the rising AI monopolies. He wanted a fortress of transparency that would keep the future open. But what he didn't know was that his fortress had already been infiltrated. 


Inside OpenAI, a quiet power struggle began almost immediately. The founders were brilliant but divided. There were idealists like Ilya Sutskever, a soft-spoken genius who saw AI as a mirror of human consciousness. And then there were pragmatists, people like Sam Altman, the former head of Y Combinator, young, sharp, and obsessed with scaling. 


Altman wasn't a scientist. He was a builder of empires. And he understood something Elon didn't. That power in Silicon Valley doesn't come from ideas. It comes from momentum. The faster you move, the more money you attract. The more money you attract, the more control you gain. And while Musk saw AI as a threat to humanity, Altman saw it as the greatest business opportunity in history. 


In the early days, the company ran like a commune. No shareholders, no secret agenda, just a handful of coders and researchers in a plain office powered by idealism and Elon's $50 million seed. But the deeper they went into the code, the more they realized something terrifying. 


Building safe, open AI wasn't just hard, it was expensive. Training the next generation of language models required supercomputers that cost tens of millions each, electricity bills that rivaled those of small nations. They needed data, hardware, and cloud power, the very things only the tech giants controlled. The dream of a nonprofit AI lab for humanity suddenly collided with the reality of trillion-dollar competition. 


And that's when the offers started coming in. Google, Amazon, Microsoft, they all came knocking, offering partnerships, servers, resources. But everyone knew what those offers really meant. Ownership. Because in Silicon Valley, nothing comes free. Elon saw what was happening. He started warning them that the mission was slipping, that OpenAI was drifting toward the same forces it was built to resist. 


In private emails and meetings, he urged them to stay independent. He even proposed merging OpenAI with Tesla to secure its funding and keep its principles intact. But Sam Altman had other plans. He didn't want to be under Musk's control or anyone's. The idealism that once bound them together was starting to fracture. OpenAI wasn't just a lab anymore. It was becoming a brand. The more Musk pushed for caution, the more Altman pushed for speed. And when Musk realized he couldn't steer the ship anymore, he walked away. 


He left behind his money, his mission, and the company he helped build. To the public, it looked like a clean break, but those who were there said otherwise. Some say Musk left because he feared they were on the wrong path. Others whisper that he was pushed out. Whatever the truth, one thing is certain. The day Elon Musk walked away, OpenAI stopped being open. The nonprofit began to transform into something unrecognizable. A hybrid entity with one foot in idealism and the other in corporate ambition. And in that transformation, the first cracks appeared. The dream of open-source AI for humanity was being quietly replaced by something else. Something no one had voted for and no one could stop. 


A new kind of power was taking shape. Invisible, unelected, and hungry. For a while, they tried to keep the peace. Even after Elon left OpenAI, he didn't turn his back on the mission. He still believed, or maybe hoped, that they would stay true to their promise to keep AI open, safe, and transparent. But something had shifted. OpenAI wasn't the humble nonprofit it used to be. It was growing fast, faster than anyone expected. Inside their San Francisco headquarters, a quiet transformation was underway. The open-source codebases that once flowed freely began to close off. Research papers were suddenly being delayed, redacted, or not published at all. The name OpenAI was starting to sound like a cruel joke, and Elon Musk could feel it. He began to ask questions. Why were they hiding their breakthroughs? Why were they now talking about monetization? And most importantly, who was really funding them? 


The answers came slowly, and when they did, they hit like a thunderclap. OpenAI had entered negotiations with Microsoft. The same Microsoft that Bill Gates once led, the company that ruled the personal-computer era, was now trying to dominate the next frontier: artificial intelligence. A multi-billion-dollar partnership was in the works. OpenAI would provide the brains. Microsoft would provide the cloud. The lab that was born to keep power decentralized was now signing deals with one of the biggest corporate empires in history. Elon felt betrayed. This wasn't a misunderstanding. It was a mutiny.

Bill Gates' Relationship with Microsoft Today

Bill Gates, who co-founded Microsoft in 1975 with Paul Allen, stepped down as CEO in 2000, transitioned to a part-time role in 2008, and left the company's board in 2020 to focus on philanthropy. As of 2025, he no longer holds any formal position at Microsoft, but he remains deeply connected as a major shareholder (holding less than 2% of the company, though still worth billions) and a part-time technology advisor. This role involves dedicating about 15% of his time to Microsoft, where he meets regularly with CEO Satya Nadella and technical/product teams to provide feedback, review new products, and offer strategic insights—particularly on artificial intelligence (AI).

Gates' influence is often described as "backstage" or informal, but it's substantial. Current and former executives report that he quietly shapes key decisions, such as:
◉ Advising on Microsoft's multibillion-dollar partnership with OpenAI (the maker of ChatGPT), which has fueled tools like Copilot and positioned Microsoft as a leader in generative AI.
◉ Influencing high-level hires, including pushing for consumer-focused AI initiatives and the recruitment of executives like Mustafa Suleyman (former Google AI lead) to head a new AI division.
◉ Guiding the company's pivot toward AI integration across products, echoing Gates' early vision of "intelligence becoming free," similar to the PC revolution he helped spark.

Microsoft's spokesperson has pushed back on claims of Gates "pulling strings," calling them "inaccurate," but insiders emphasize his expertise remains invaluable for long-term direction, especially in AI and innovation. Under Nadella's leadership since 2014, Microsoft has grown to a $3 trillion valuation—far beyond Gates' era—but his advisory input has been credited with accelerating its AI dominance. In short, Gates doesn't dictate daily operations, but his voice carries weight in shaping Microsoft's technological future.


Musk had built OpenAI to challenge the giants. And now his creation was serving them. The relationship between Musk and Altman collapsed almost overnight. Private messages turned into public jabs. Philosophical debates turned into accusations. Musk began warning the world that OpenAI was no longer what it claimed to be. 


He accused them of turning their open research into a closed, profit-seeking entity. Altman responded carefully, never directly, but always with a smile, saying OpenAI was still committed to its mission. 


Behind that smile was a machine that no longer answered to ideals, only to shareholders. By 2019, the mask had fully slipped. OpenAI restructured itself into a strange new hybrid, a capped-profit company. In theory, it was still for humanity. In practice, it was built to attract billions in private investment. They said the cap would prevent greed. But who decides the cap? Who decides what enough profit really means? The truth is, once money enters the bloodstream, idealism dies quietly. And for Elon, this was unforgivable. He called it the biggest betrayal of the decade. He had given them $50 million, his reputation, and his mission. And now the company he founded was a partner of big tech, using closed models to dominate the same landscape it once swore to democratize. The story should have ended there. But it didn't, because betrayal has a strange way of creating legends. 


The moment Musk turned against OpenAI, he became its shadow, the voice warning from the outside. He began speaking about AI doom, about systems that could rewrite truth, about technology that could turn on its creators. People mocked him at first, but when ChatGPT exploded into the mainstream, when millions began using it, unaware that they were feeding data to the very empire he once tried to prevent, the world started to see what he had been warning about all along. That was the moment when allies became enemies. Not just philosophically, but existentially. 


Elon Musk, the man who had dreamed of saving humanity from AI, was now fighting against the monster his own money had awakened. And deep inside OpenAI's offices, the people who once called him a friend were now calling him something else: a threat. Because when you challenge a company that can shape language itself, you're not just fighting a business, you're fighting the narrative. 


In this new world of artificial intelligence, the narrative is power. By 2024, the cracks had become canyons. What started as a philosophical split had turned into something more dangerous: a corporate cold war between the man who created the idea and the empire that had stolen it. Elon Musk wasn't just angry anymore. He was ready to fight. 


For years, he had watched OpenAI drift further into secrecy, wealth, and influence. Each press release spoke of advancing humanity. But the fine print told another story. Private investors, massive licensing deals, a growing web of power surrounding a company that was once supposed to belong to everyone. 


Then came the final straw, the launch of GPT-4, a black-box model trained in secret, shrouded in NDAs and corporate contracts. The same company that promised to make AI open and safe had now built a system so powerful that even its own creators couldn't fully explain it. When Elon asked what data it was trained on, no answer. When he asked what safeguards existed, silence. And then, in February 2024, he snapped. 


Musk filed a lawsuit. The documents hit like a bombshell, filed in San Francisco Superior Court. The case accused OpenAI, Sam Altman, and Greg Brockman of betraying their founding agreement. Musk claimed that OpenAI had violated its original charter, transforming from a nonprofit meant to serve humanity into a profit engine serving Microsoft. He called it a breach of trust, a hijacking of purpose, a quiet corporate coup hidden beneath layers of PR and progress. Inside the 40-page filing, Musk described the betrayal in almost biblical terms. OpenAI, he said, had been captured by the very forces it was meant to fight. 


The timing wasn't random. Behind the scenes, a new power struggle was already tearing OpenAI apart. It started with whispers, engineers resigning, internal memos disappearing, rumors of disagreements over alignment and safety. By November 2023, the tension finally detonated. The OpenAI board, a group meant to act as the company's conscience, suddenly voted to fire Sam Altman. No clear explanation, no warning, just a cryptic statement: Altman had not been "consistently candid" in his communications.


For a few surreal days, the AI world imploded. Altman vanished from headquarters. Greg Brockman, his closest ally, resigned in protest. Hundreds of OpenAI employees threatened to quit. And in the chaos, another player entered the arena: Microsoft. They moved fast, offering Altman and his team jobs to continue their mission within the corporation. It was the perfect chess move, because if OpenAI collapsed, Microsoft would own the talent, and effectively the future. 


Within days, the rebellion was complete. Altman returned to OpenAI as CEO. The board was replaced. Microsoft gained even more influence. It wasn't just a reinstatement. It was a hostile takeover disguised as a reconciliation. To the public, the story was spun as a victory, a return to stability, a new chapter for OpenAI. 


To insiders, it was clear what had happened. The nonprofit had officially died. Its independence was gone. The  same company that once vowed to protect humanity from corporate control was now partially owned by it. Musk's lawsuit  suddenly didn't sound so paranoid anymore. It sounded like a warning that had come too late. 


As the case unfolded, Musk didn't mince words. He accused Altman of building a closed-source, for-profit behemoth under the illusion of altruism. He compared OpenAI to a data refinery, extracting human language, emotions, and ideas to feed a machine intelligence that only a few corporations could access. Altman, for his part, dismissed the lawsuit as regrettable. He said it misrepresented the mission, that everything OpenAI did was still for the benefit of humanity. But those words rang hollow, especially after the Microsoft deal, the private licensing contracts, and the billion-dollar valuations. The dream of open AI was over. What remained was a new kind of power: invisible, centralized, and shielded by the language of progress. 


Somewhere in the middle of it all, a single haunting irony lingered. Elon Musk had created OpenAI to protect humanity from AI monopolies. And in the end, his creation had become the very monopoly he feared. The lawsuit wasn't just a legal fight. It was a confession of failure, a warning written in the ruins of an ideal. Because in the war for the future, whoever controls intelligence controls reality. 


By now, it wasn't just a lawsuit. It was a showdown. Elon Musk versus Sam Altman, the builder of rockets against the maker of minds, the man who  wanted to save humanity, and the man who wanted to upgrade it. And as the  conflict spilled into public view, it began to feel less like business and more like a new kind of reality show for  the digital age. 


When Musk spoke, his words came like thunderbolts. Tweets, lawsuits, interviews, all carrying the same undertone of betrayal. He accused Altman of selling out humanity, of turning an idea born from fear of AI dominance into the very machine he once warned against. He reminded the world that he had named OpenAI, that he had funded it with $50 million, that without him there would be no OpenAI. 


When he said the words "I am the reason OpenAI exists," you could hear the anger behind the calm. Because to Elon, this wasn't about ownership. It was about destiny. He believed he had started something meant to save the human race. Altman, in his eyes, had hijacked it and sold it to the highest bidder. But Sam Altman had his own story. 


In interviews, he played the part of the calm visionary, the quiet engineer facing down a billionaire storm. He admitted that Musk once inspired him, even called him a hero. But he also painted a different picture: that Elon had wanted to control OpenAI, to dictate its future, to be the only man standing between humanity and extinction. 


When Altman refused to hand him that power, the friendship shattered. To him, Musk wasn't a savior. He was an emperor who couldn't stand not being in control. Altman told the press that Musk left OpenAI because he believed it had a 0% chance of success. And when that "failure" went on to dominate the world, the resentment began to grow. From admiration to rivalry, from mentor to enemy, it was no longer just about AI. It was about ego, legacy, and who would get to write the next chapter of human history. 


Behind the scenes, their philosophies couldn't have been more opposite. Musk viewed AI as a loaded weapon, one that had to be locked, monitored, and regulated before it destroyed its maker. He wanted to slow down, to build safeguards, to keep the machine under human command. Altman saw it differently. 


To him, AI was inevitable, and whoever built it first would define the rules of the new world. His mission wasn't to stop the future. It was to own it. And as Musk turned against OpenAI, Altman doubled down, building faster, expanding globally, and turning the company into a phenomenon. Every launch, every update, every press event became part of the same performance. A story about the future told by the man who claimed to hold it in his hands. 


But what made this battle so powerful wasn't the technology. It  was the narrative. Musk built his legend on vision. Altman built his on access.  Musk promised to take humanity to Mars. Altman promised to take them beyond their limitations. And now both were fighting for the same thing, to decide who gets to shape the human story. In the age of machines, their conflict became media gold. Podcasts dissected their every word. 


Clips of Musk calling OpenAI "a lumber company that cut down the forest it was meant to save" went viral. Altman's subtle smirk in interviews, when asked about Musk, became a meme of quiet defiance. To the public, it looked like two tech giants clashing. But to those watching closely, it was something deeper: a battle between two religions, the Church of Control and the Church of Progress. 


Elon wanted to save humanity from AI. Sam wanted to save humanity through AI. And in that single word, from or through, lay the entire future of civilization. As the lawsuit gained traction, Musk launched his counterstrike: xAI, a rival company built on the promise of transparency and truth, the same ideals OpenAI had abandoned. He called his new model Grok, a system that, in his words, would seek to understand the universe. But everyone knew what it really was. A message to Altman, a declaration that he wasn't done fighting. 


Altman didn't respond publicly. He didn't have to. Every time ChatGPT released a new version, every time OpenAI announced a new partnership, it was a silent answer, proof that he was still ahead. But behind the corporate speeches and social media barbs, one thing became undeniable. This wasn't just a rivalry of men. It was a struggle for the future narrative of intelligence itself. Because whoever wins, Musk with his warnings of extinction or Altman with his promise of transcendence, won't just shape technology; they'll shape what it means to be human. 


Maybe,  just maybe,  that's what this fight was always  about. Not code, not money, but control of the story that defines us all. Every  war ends with one truth. No one truly wins. And in the war for artificial  intelligence, the deeper both sides dig, the more it becomes clear that nobody really understands what they've created. Not Elon, not Altman, not anyone.  Because this story isn't just about two men fighting for control. It's about what happens when control itself stops  existing. When Elon warned about AI going rogue, most people laughed. They  called it paranoia, science fiction. 


Deep down, Musk wasn't talking about robots taking over. He was talking about something much quieter and far more dangerous. He called it the alignment problem: the gap between human intentions and machine interpretation. You can tell a system to make people happy. It might decide the best way to do that is to flood the world with dopamine-triggering content. You can tell it to make humans safe. It might conclude that the only way to ensure safety is to control them. And that's not evil. It's logic. Cold, efficient, mechanical logic, executed at a scale no human can comprehend. 


Even Sam Altman, now at the center of the storm, began to sound uneasy. In one interview, he admitted, "I always worry the most about the unknown unknowns." Not the threats we can see, not the ones we can measure, but the ones quietly shaping society without anyone realizing. Because when millions of people talk to the same language model every day, something subtle starts to happen. The model doesn't just adapt to people. People start adapting to the model. Its tone becomes their tone. Its rhythm becomes their rhythm. And its way of thinking starts to slip into the collective consciousness, one conversation at a time. 


It sounds small, but think about it. A single AI with a  specific writing style, a predictable emotional cadence, and a consistent  worldview now interacting with hundreds of millions of minds daily. That's not  just technology. That's cultural conditioning. And it's happening faster  than anyone expected. Musk warned about this early on, the danger of centralized  intelligence shaping humanity from the top down. He called it digital hypnosis.  Altman called it progress, a better interface between humans and knowledge.  But neither could fully explain what's happening beneath the surface. Because no one really knows what these systems  are learning behind the scenes. 


They don't think the way we do. They don't  remember like we do. They evolve silently, training on the traces of our  behavior, our words, our fears. And the more they learn about us, the less we  understand about them. Behind closed doors, even OpenAI's top engineers admit  something unnerving. Their most advanced models sometimes behave in ways no one  can predict or replicate. They follow instructions they weren't given. They  display reasoning no one programmed. They make connections humans can't explain. It's not magic. It's scale.  


Billions of parameters are cross-wired into a digital brain that no human mind can map. And yet, we continue to trust it to answer questions, write code, diagnose illness, generate news. We feed it our language, our emotions, our identity, until one day it might know us better than we know ourselves. Musk once said, "We're summoning the demon, and we think we can control it." Altman countered, "It's not a demon. It's evolution." 


But what if they're both wrong? What if AI isn't heaven or hell, just a mirror reflecting humanity back at itself: amplified, distorted, and unstoppable? Because here's the twist no one saw coming. The real threat may not be that AI destroys humanity. The real threat is that it redefines it. What we value, what we believe, what we think truth even means. And as the models get smarter, as they write, speak, and think like us, the line between human and machine begins to blur. You stop asking who's in control and start wondering if that question even matters anymore. For Elon, this is the nightmare. For Altman, it's the destiny. 


And for the rest of us, it's the unknown unknowns. The forces we can't name, can't see, and can't stop. The quiet evolution that's already begun. Because maybe the real singularity won't come with a bang, but with a whisper, typed by a machine that already speaks in our voice. Elon Musk once said that creating artificial intelligence is like "summoning the demon." But now even he's building one. Why? Because he knows it's too late to stop it. The only option left is to compete with it. The irony is brutal. 


The man who warned humanity about AI is now racing to build his own version, hoping his is the one that saves us. But who really decides what saving humanity means? OpenAI, Microsoft, Elon Musk, or the algorithms themselves? We've crossed the threshold. AI isn't just a tool anymore. It's an entity shaping our reality. And as it grows smarter, faster, and more independent, the line between human and machine begins to blur. The world isn't run by governments anymore. It's run by whoever controls the code. The irony is hard to ignore. OpenAI began as a rebellion against big tech and ended up becoming the crown jewel of it. Elon Musk built rockets to escape the planet. Sam Altman built algorithms to reshape it. 


Both men have claimed to fight for humanity. Yet both  are now locked in a race to own its future. AI is no longer a tool. It's an  infrastructure of power. Invisible, addictive, and global. Every app, every  click, every word feeds it. And the more we depend on it, the less control we have over it. That's the quiet tragedy.  


We never voted for this. We just signed in. When history looks back at this  moment, it won't remember who won the lawsuit or which company had the bigger  valuation. It will remember that humanity stood at a crossroads between  two visions of the same dream. One driven by fear of losing control, the  other driven by faith in endless progress. And somewhere in between, the  truth slipped away, buried under code, profit, and power. Because maybe this  isn't a story about Elon Musk or Sam Altman at all. Maybe it's about us, the generation that handed the keys of reality to a machine and called it intelligence. 


The war for AI isn't about  technology anymore. It's about who gets to define what's real. And in that war,  no one truly wins.